Results 1 - 18 of 18
1.
PLoS Negl Trop Dis ; 17(4): e0010862, 2023 04.
Article in English | MEDLINE | ID: mdl-37043542

ABSTRACT

Phlebotomine sand flies are of global significance as important vectors of human disease, transmitting bacterial, viral, and protozoan pathogens, including the kinetoplastid parasites of the genus Leishmania, the causative agents of devastating diseases collectively termed leishmaniasis. More than 40 pathogenic Leishmania species are transmitted to humans by approximately 35 sand fly species in 98 countries, with hundreds of millions of people at risk around the world. No approved efficacious vaccine exists for leishmaniasis, and available therapeutic drugs are either toxic and/or expensive, or the parasites are becoming resistant to the more recently developed drugs. Therefore, sand fly and/or reservoir control are currently the most effective strategies to break transmission. To better understand the biology of sand flies, including the mechanisms involved in their vectorial capacity, insecticide resistance, and population structures, we sequenced the genomes of two geographically widespread and important sand fly vector species: Phlebotomus papatasi, a vector of Leishmania parasites that cause cutaneous leishmaniasis (distributed in Europe, the Middle East, and North Africa), and Lutzomyia longipalpis, a vector of Leishmania parasites that cause visceral leishmaniasis (distributed across Central and South America). We categorized and curated genes involved in processes important to their roles as disease vectors, including chemosensation, blood feeding, circadian rhythm, immunity, and detoxification, as well as mobile genetic elements. We also defined gene orthology and observed micro-synteny among the genomes. Finally, we present the genetic diversity and population structure of these species in their respective geographical areas. These genomes will be a foundation on which to base future efforts to prevent vector-borne transmission of Leishmania parasites.


Subject(s)
Leishmania, Cutaneous Leishmaniasis, Phlebotomus, Psychodidae, Animals, Humans, Phlebotomus/parasitology, Psychodidae/parasitology, Leishmania/genetics, Genomics
2.
PeerJ Comput Sci ; 8: e963, 2022.
Article in English | MEDLINE | ID: mdl-35634111

ABSTRACT

Research software is a critical component of contemporary scholarship. Yet, most research software is developed and managed in ways that are at odds with its long-term sustainability. This paper presents findings from a survey of 1,149 researchers, primarily from the United States, about sustainability challenges they face in developing and using research software. A key finding is the recurring need for more opportunities and time for developers of research software to receive training; these training needs span the software lifecycle and various types of tools. We also identified recurring needs for better models of funding research software and for crediting those who develop it so they can advance in their careers. The results of this survey will help inform future infrastructure and service support for software developers and users, as well as national research policy aimed at increasing the sustainability of research software.

3.
F1000Res ; 9: 1192, 2020.
Article in English | MEDLINE | ID: mdl-33214878

ABSTRACT

Background: Software is now ubiquitous within research. In addition to the general challenges common to all software development projects, research software must also represent, manipulate, and provide data for complex theoretical constructs. Ensuring this process of theory-software translation is robust is essential to maintaining the integrity of the science resulting from it, and yet there has been little formal recognition or exploration of the challenges associated with it. Methods: We thematically analyse the outputs of the discussion sessions at the Theory-Software Translation Workshop 2019, where academic researchers and research software engineers from a variety of domains, and with particular expertise in high performance computing, explored the process of translating between scientific theory and software. Results: We identify a wide range of challenges to implementing scientific theory in research software and using the resulting data and models for the advancement of knowledge. We categorise these within the emergent themes of design, infrastructure, and culture, and map them to associated research questions. Conclusions: Systematically investigating how software is constructed and its outputs used within science has the potential to improve the robustness of research software and accelerate progress in its development. We propose that this issue be examined within a new research area of theory-software translation, which would aim to significantly advance both knowledge and scientific practice.


Subject(s)
Computing Methodologies, Software, Engineering, Humans, Knowledge, Research Personnel
4.
Wellcome Open Res ; 5: 267, 2020.
Article in English | MEDLINE | ID: mdl-33501381

ABSTRACT

The systemic challenges of the COVID-19 pandemic require cross-disciplinary collaboration in a global and timely fashion. Such collaboration needs open research practices and the sharing of research outputs, such as data and code, thereby facilitating research, research reproducibility, and timely collaboration beyond borders. The Research Data Alliance COVID-19 Working Group recently published a set of recommendations and guidelines on data sharing and related best practices for COVID-19 research. These guidelines include recommendations for clinicians, researchers, policy- and decision-makers, funders, publishers, public health experts, disaster preparedness and response experts, infrastructure providers, and other potential users, from the perspective of different domains (Clinical Medicine, Omics, Epidemiology, Social Sciences, Community Participation, Indigenous Peoples, Research Software, Legal and Ethical Considerations). Several overarching themes emerge from the document, such as the need to balance the creation of data adherent to the FAIR principles (findable, accessible, interoperable, and reusable) with the need for quick data release; the use of trustworthy research data repositories; the use of well-annotated data with meaningful metadata; and practices of documenting methods and software. The resulting document marks an unprecedented cross-disciplinary, cross-sectoral, and cross-jurisdictional effort authored by over 160 experts from around the globe. This letter summarises key points of the Recommendations and Guidelines, highlights the relevant findings, shines a spotlight on the process, and suggests how these developments can be leveraged by the wider scientific community.

5.
Expert Opin Drug Discov ; 14(1): 9-22, 2019 01.
Article in English | MEDLINE | ID: mdl-30484337

ABSTRACT

INTRODUCTION: Computational chemistry dramatically accelerates the drug discovery process, and high-performance computing (HPC) can be used to speed up the most expensive calculations. Supporting a local HPC infrastructure is both costly and time-consuming; therefore, many research groups are moving from in-house solutions to remote distributed computing platforms. AREAS COVERED: The authors focus on the use of distributed technologies, solutions, and infrastructures to gain access to HPC capabilities, software tools, and datasets to run the complex simulations required in computational drug discovery (CDD). EXPERT OPINION: The use of computational tools can decrease the time to market of new drugs. HPC has a crucial role in handling the complex algorithms and large volumes of data required to achieve specificity and avoid undesirable side-effects. Distributed computing environments have clear advantages over in-house solutions in terms of cost and sustainability. The use of infrastructures relying on virtualization reduces set-up costs. Distributed computing resources can be difficult to access, although web-based solutions are becoming increasingly available. There is a trade-off between cost-effectiveness and accessibility in using on-demand computing resources rather than free/academic resources. Graphics processing unit computing, with its outstanding parallel computing power, is becoming increasingly important.


Subject(s)
Computational Chemistry/methods, Computer Simulation, Drug Discovery/methods, Algorithms, Animals, Computing Methodologies, Humans, Software, Time Factors
7.
J Cheminform ; 8: 58, 2016.
Article in English | MEDLINE | ID: mdl-27818709

ABSTRACT

BACKGROUND: In quantum chemistry, many tasks recur frequently, e.g., geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time spent on manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. Designing, implementing, and testing these workflows requires significant effort and specific expertise. SIGNIFICANCE: Many of these workflows are complex, monolithic entities built for particular scientific experiments. Hence, they are not straightforward to modify, which makes them almost impossible to share. To address these issues, we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing within and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. CONCLUSIONS: We investigated the operations that represent basic functionalities in quantum chemistry, developed the relevant atomic workflows, and combined them into meta-workflows. With these workflows in place, we defined the structure of the Quantum Chemistry workflow library and uploaded the workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.
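The atomic-workflow/meta-workflow idea described above lends itself to a small illustration. The sketch below is purely illustrative and assumes nothing about the SHIWA data model or the paper's formal specification; the class names and the toy geometry-optimization steps are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AtomicWorkflow:
    """One well-defined, domain-specific step (hypothetical model)."""
    name: str
    run: Callable[[Dict], Dict]  # maps inputs to outputs

@dataclass
class MetaWorkflow:
    """Orchestrates atomic workflows; each step's outputs feed later steps."""
    name: str
    steps: List[AtomicWorkflow] = field(default_factory=list)

    def run(self, inputs: Dict) -> Dict:
        data = dict(inputs)
        for step in self.steps:
            data.update(step.run(data))
        return data

# Toy "geometry optimisation" meta-workflow assembled from atomic steps.
prepare = AtomicWorkflow("prepare_input", lambda d: {"geometry": d["molecule"] + "_3d"})
optimise = AtomicWorkflow("optimise", lambda d: {"energy": -76.4, "geometry": d["geometry"] + "_opt"})
report = AtomicWorkflow("extract_results", lambda d: {"report": f"{d['geometry']}: {d['energy']} Ha"})

pipeline = MetaWorkflow("geometry_optimisation", [prepare, optimise, report])
print(pipeline.run({"molecule": "H2O"})["report"])  # H2O_3d_opt: -76.4 Ha
```

Because each atomic step exposes only named inputs and outputs, the same steps can be recombined into other meta-workflows or published and reused independently, which is the sharing benefit the abstract describes.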

8.
Springerplus ; 5(1): 1300, 2016.
Article in English | MEDLINE | ID: mdl-27547674

ABSTRACT

BACKGROUND: Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, it is increasingly challenging to provide the life-sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. RESULTS: To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. CONCLUSIONS: Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.
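Because individual docking runs are independent, this kind of screening parallelizes naturally. The fragment below is a minimal sketch of that pattern only; it does not use FlexScreen, Pipeline Pilot, or SOAP, and score_ligand is a hypothetical stand-in for a real docking call.

```python
from concurrent.futures import ProcessPoolExecutor
from typing import Dict, List

def score_ligand(smiles: str) -> float:
    """Hypothetical stand-in for a real biophysics-based docking/scoring call."""
    return float(len(smiles))  # toy score only

def screen_library(ligands: List[str], workers: int = 4) -> Dict[str, float]:
    """Screen a ligand library in parallel; each ligand is an independent task."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(zip(ligands, pool.map(score_ligand, ligands)))

if __name__ == "__main__":
    library = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
    for ligand, score in sorted(screen_library(library).items(), key=lambda kv: kv[1]):
        print(ligand, score)
```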

9.
BMC Bioinformatics ; 17: 127, 2016 Mar 12.
Article in English | MEDLINE | ID: mdl-26968893

ABSTRACT

BACKGROUND: Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking the complexity of such experiments down into a set of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers immediate benefits such as identifying bottlenecks and pinpointing sections that could benefit from parallelization. Workflows rest upon this notion of splitting complex work into several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address the problems of a specific community, and therefore each has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. RESULTS: We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free, structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with substantial user communities: the Konstanz Information Miner, an engine we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. CONCLUSIONS: Our work will not only reduce the time spent designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of the obtained scientific results.
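To make the idea of a tool descriptor concrete, here is a small illustrative sketch: a command-line tool's parameters, inputs, and outputs are described once in a structured, platform-free document, and a concrete invocation is rendered from it. The field names and dictionary layout are invented for illustration and do not follow the actual Common Tool Descriptor schema; the "dock" tool is hypothetical.

```python
# Illustrative only: a structured, platform-free description of a command-line
# tool, from which a concrete invocation is rendered. Field names are invented,
# not the real Common Tool Descriptor schema; the "dock" tool is hypothetical.
tool_descriptor = {
    "name": "example_docking_tool",
    "executable": "dock",
    "parameters": [
        {"name": "receptor",       "type": "input-file",  "flag": "--receptor"},
        {"name": "ligands",        "type": "input-file",  "flag": "--ligands"},
        {"name": "out",            "type": "output-file", "flag": "--out"},
        {"name": "exhaustiveness", "type": "int",         "flag": "--exhaustiveness", "default": 8},
    ],
}

def render_command(descriptor: dict, values: dict) -> list:
    """Build an argv list from the descriptor plus user-supplied values."""
    argv = [descriptor["executable"]]
    for param in descriptor["parameters"]:
        value = values.get(param["name"], param.get("default"))
        if value is not None:
            argv += [param["flag"], str(value)]
    return argv

print(render_command(tool_descriptor,
                     {"receptor": "kinase.pdb", "ligands": "library.sdf", "out": "hits.sdf"}))
```

Keeping the description separate from any particular engine is what lets different workflow systems consume the same tool definition.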


Asunto(s)
Biología Computacional/métodos , Redes de Comunicación de Computadores , Microcomputadores , Programas Informáticos , Flujo de Trabajo , Reproducibilidad de los Resultados
10.
Curr Drug Targets ; 17(14): 1649-1660, 2016.
Article in English | MEDLINE | ID: mdl-26844570

ABSTRACT

Virtual screening for active compounds has become an essential step within the drug development pipeline. Computer-based prediction of compound binding modes is one of the most time- and cost-efficient methods for screening ligand libraries and enriching results with potential drugs. Here we present an overview of currently available online resources regarding compound databases, docking applications, and science gateways for drug discovery and virtual screening, in order to help structural biologists choose the best tools for their analysis. The appearance of the user interface, authentication and security aspects, data management, and computational performance are discussed. We provide a broad overview of currently available solutions, guiding computational chemists and users from related fields towards scientifically reliable results.


Subject(s)
Drug Discovery/methods, Proteins/chemistry, Proteins/metabolism, Computer Simulation, Chemical Compound Databases, Humans, Internet, Ligands, Molecular Models, Molecular Docking Simulation, User-Computer Interface
11.
Nucleic Acids Res ; 43(Database issue): D707-13, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25510499

ABSTRACT

VectorBase is a National Institute of Allergy and Infectious Diseases-supported Bioinformatics Resource Center (BRC) for invertebrate vectors of human pathogens. Now in its 11th year, VectorBase currently hosts the genomes of 35 organisms, including a number of non-vectors for comparative analysis. Hosted data range from genome assemblies with annotated gene features, through transcript and protein expression data, to population genetics data including variation and insecticide-resistance phenotypes. Here we describe improvements to our resource and the set of tools available for interrogating and accessing BRC data, including the integration of Web Apollo to facilitate community annotation and of Galaxy to support user-based workflows. VectorBase also actively supports our community through hands-on workshops and online tutorials. All information and data are freely available from our website at https://www.vectorbase.org/.


Subject(s)
Genetic Databases, Disease Vectors, Genomics, Animals, Biological Ontologies, Gene Expression Profiling, Genetic Variation, Genome, Humans, Insecticide Resistance, Internet, Invertebrates/genetics, Metabolic Networks and Pathways/genetics
12.
Biomed Res Int ; 2014: 134023, 2014.
Article in English | MEDLINE | ID: mdl-25254202

ABSTRACT

The explosion of data both in biomedical research and in healthcare systems demands urgent solutions. In particular, research in the omics sciences is moving from a hypothesis-driven to a data-driven approach. Healthcare, in addition, increasingly demands tighter integration with biomedical data in order to promote personalized medicine and provide better treatments. Efficient analysis and interpretation of Big Data opens new avenues to explore molecular biology, new questions to ask about physiological and pathological states, and new ways to answer these open issues. Such analyses lead to a better understanding of diseases and to the development of better, personalized diagnostics and therapeutics. However, such progress depends directly on the availability of new solutions to deal with this huge amount of information. New paradigms are needed for storing and accessing data, for annotating and integrating it, and finally for inferring knowledge and making it available to researchers. Bioinformatics can be viewed as the "glue" for all these processes. A clear awareness of present high-performance computing (HPC) solutions in bioinformatics, of Big Data analysis paradigms for computational biology, and of the issues that remain open in the biomedical and healthcare fields represents the starting point for meeting this challenge.


Subject(s)
Biomedical Research, Computational Biology/methods, Data Mining/methods, Delivery of Health Care, Humans, Precision Medicine, Software
13.
Biomed Res Int ; 2014: 624024, 2014.
Article in English | MEDLINE | ID: mdl-25032219

ABSTRACT

Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while accounting for overhead and the cores available on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes, even when distributed over large DCIs, thus accelerating vHTS campaigns significantly.
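The trade-off behind the optimal splitting can be illustrated with a toy cost model: each chunk carries a fixed overhead (staging, structure preparation, scheduling), and only a limited number of chunks run concurrently. The sketch below is not the paper's analysis; the cost formula and all numbers are illustrative assumptions.

```python
import math

def makespan(n_ligands: int, n_chunks: int, cores: int,
             t_per_ligand: float, overhead_per_chunk: float) -> float:
    """Estimated wall-clock time for a dataset split into n_chunks.

    Each chunk costs its share of docking time plus a fixed per-chunk overhead;
    at most `cores` chunks run at once, so chunks execute in waves.
    """
    chunk_time = (n_ligands / n_chunks) * t_per_ligand + overhead_per_chunk
    waves = math.ceil(n_chunks / cores)
    return waves * chunk_time

def best_split(n_ligands: int, cores: int, t_per_ligand: float,
               overhead_per_chunk: float, max_chunks: int = 2000) -> int:
    """Pick the chunk count that minimizes the estimated makespan."""
    return min(range(1, max_chunks + 1),
               key=lambda k: makespan(n_ligands, k, cores, t_per_ligand, overhead_per_chunk))

# Illustrative numbers: 100,000 ligands, 500 concurrent processes,
# 30 s of docking per ligand, 120 s of fixed overhead per chunk.
k = best_split(100_000, 500, 30.0, 120.0)
print(k, round(makespan(100_000, k, 500, 30.0, 120.0) / 3600, 2), "hours")
```

Under this toy model, splitting into more chunks than available cores only adds overhead, while too few chunks leaves cores idle, which is why an optimum exists.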


Subject(s)
Protein Databases, Drug Discovery/methods, Molecular Docking Simulation/methods, Protein Kinase Inhibitors/chemistry, Protein Kinases/chemistry
15.
J Chem Theory Comput ; 10(6): 2232-45, 2014 Jun 10.
Article in English | MEDLINE | ID: mdl-26580747

ABSTRACT

The MoSGrid portal offers scientists of all backgrounds and experience levels an approach to carry out high-quality molecular simulations on distributed compute infrastructures. A user-friendly web interface guarantees the ease of use of modern chemical simulation applications well established in the field. The use of well-defined workflows annotated with metadata greatly improves the reproducibility of simulations, in the sense of good laboratory practice. The MoSGrid science gateway supports applications in the domains of quantum chemistry (QC), molecular dynamics (MD), and docking. This paper presents the open-source MoSGrid architecture as well as lessons learned from its design.

17.
Stud Health Technol Inform ; 175: 142-51, 2012.
Article in English | MEDLINE | ID: mdl-22942005

ABSTRACT

The new science gateway MoSGrid (Molecular Simulation Grid) enables users to submit and process molecular simulation studies on a large scale. A conformational analysis of guanidine zinc complexes, which are active catalysts in the ring-opening polymerization of lactide, is presented as an example. Such a large-scale quantum chemical study is enabled by workflow technologies. Forty conformers were generated for each of the two guanidine zinc complexes. Their structures were optimized using Gaussian03 and the energies processed within the quantum chemistry portlet of the MoSGrid portal. All meta- and post-processing steps were performed in this portlet. All workflow features are implemented via WS-PGRADE and submitted to UNICORE.


Subject(s)
Guanidine/chemistry, Information Storage and Retrieval/methods, Internet, Molecular Models, Science, User-Computer Interface, Zinc/chemistry, Computer Simulation, Health Services Research/methods, Information Dissemination/methods, Molecular Conformation, Workflow
18.
Genome Biol ; 10(9): R98, 2009.
Article in English | MEDLINE | ID: mdl-19761611

ABSTRACT

Genome resequencing with short reads generally relies on alignments against a single reference. GenomeMapper supports simultaneous mapping of short reads against multiple genomes by integrating related genomes (e.g., individuals of the same species) into a single graph structure. It constitutes the first approach for handling multiple references and introduces representations for alignments against complex structures. Demonstrated benefits include access to polymorphisms that cannot be identified by alignments against the reference alone. Download GenomeMapper at http://1001genomes.org.
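As a purely illustrative aside, the idea of integrating several related genomes into a single graph and matching reads against it can be sketched in a few lines. This is not GenomeMapper's data structure or algorithm; the toy variation graph, the SNP/indel sites, and the exhaustive path enumeration below are assumptions made only for illustration.

```python
# Toy variation graph: a list of node sets; each path through the graph spells
# one of the related genomes (shared blocks plus variant sites).
GRAPH = [
    {"ACGT"},      # block shared by all individuals
    {"A", "G"},    # SNP site: some individuals carry A, others G
    {"TTGC"},      # shared block
    {"", "CT"},    # indel: "CT" present in some individuals only
    {"GGA"},       # shared block
]

def spelled_genomes(nodes):
    """Enumerate every sequence spelled by a path through the graph."""
    if not nodes:
        yield ""
        return
    for choice in nodes[0]:
        for rest in spelled_genomes(nodes[1:]):
            yield choice + rest

def read_maps(read: str) -> bool:
    """A short read 'maps' if it occurs in any genome represented by the graph."""
    return any(read in genome for genome in spelled_genomes(GRAPH))

print(read_maps("GTATTGCCTGG"))  # True: requires the A allele and the CT insertion
print(read_maps("AAAA"))         # False: no represented genome contains it
```

The first read spans a polymorphism present in only one individual, which is the kind of alignment a single linear reference would miss.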


Subject(s)
Sequence Alignment/methods, DNA Sequence Analysis/methods, Software, Algorithms, Base Sequence, Computational Biology/methods, Genome/genetics, Genomics/methods, Molecular Sequence Data, Reproducibility of Results, Nucleic Acid Sequence Homology